03. Batch Normalization
Batch normalization is a technique for improving the performance and stability of neural networks. The idea is to normalize the inputs to each layer so that they have a mean of zero and a variance of one, much like we standardize the inputs to the network as a whole. Batch normalization is necessary to make DCGANs work.
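To make that idea concrete before you open the notebooks, here is a minimal NumPy sketch of the batch normalization forward pass. This is an illustration of the math, not the notebooks' exact TensorFlow code; the batch_norm function name, the epsilon value, and the example shapes are assumptions for demonstration. gamma and beta are the learned scale and shift parameters from the original batch normalization paper.

import numpy as np

def batch_norm(x, gamma, beta, epsilon=1e-5):
    # Compute the mean and variance of each feature across the batch.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # Normalize each feature to zero mean and unit variance.
    # epsilon guards against division by zero for low-variance features.
    x_hat = (x - mu) / np.sqrt(var + epsilon)
    # Scale and shift with the learned parameters gamma and beta,
    # so the network can still represent other means and variances.
    return gamma * x_hat + beta

# Example: a batch of 4 samples with 3 features each,
# deliberately shifted and scaled away from zero mean / unit variance.
x = np.random.randn(4, 3) * 10 + 5
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0))  # approximately 0 for each feature
print(y.std(axis=0))   # approximately 1 for each feature

Note that this sketch uses the statistics of the current batch, which is what happens at training time; at inference time, frameworks typically substitute running averages of the mean and variance collected during training.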
We've prepared a few notebooks for you to work through that will teach you about batch normalization and how to implement it in TensorFlow. As usual, you can find the notebooks on our GitHub repository in the batch-norm folder. If you have already cloned the repo, do a git pull to get the new files. Otherwise, clone the repository:
git clone https://github.com/udacity/deep-learning.git
You'll find three notebooks:
Batch_Normalization_Lesson.ipynb - A notebook showing you how batch normalization works
Batch_Normalization_Exercises.ipynb - Exercises for you to implement batch normalization
Batch_Normalization_Solutions.ipynb - Solutions to those exercises